13 research outputs found

    The Incremental Multiresolution Matrix Factorization Algorithm

    Multiresolution analysis and matrix factorization are foundational tools in computer vision. In this work, we study the interface between these two distinct topics and obtain techniques to uncover hierarchical block structure in symmetric matrices -- an important aspect in the success of many vision problems. Our new algorithm, the incremental multiresolution matrix factorization, uncovers such structure one feature at a time, and hence scales well to large matrices. We describe how this multiscale analysis goes much farther than what a direct global factorization of the data can identify. We evaluate the efficacy of the resulting factorizations for relative leveraging within regression tasks using medical imaging data. We also use the factorization on representations learned by popular deep networks, providing evidence of their ability to infer semantic relationships even when they are not explicitly trained to do so. We show that this algorithm can be used as an exploratory tool to improve the network architecture, and within numerous other settings in vision.
    Comment: Computer Vision and Pattern Recognition (CVPR) 2017, 10 pages
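To make "hierarchical block structure in symmetric matrices" concrete, here is a minimal numpy sketch. It is not the paper's incremental algorithm: it uses plain spectral reordering (sorting rows by the second eigenvector) as a stand-in to show how a hidden block structure in a shuffled symmetric matrix can be exposed. The matrix and its block values are synthetic.

```python
import numpy as np

# Toy symmetric matrix with a hidden 2-block structure; rows/cols shuffled.
rng = np.random.default_rng(0)
A = np.full((6, 6), 0.1)
A[:3, :3] = 0.9
A[3:, 3:] = 0.9
perm = rng.permutation(6)
A_shuffled = A[np.ix_(perm, perm)]

# Spectral reordering: the eigenvector of the second-largest eigenvalue
# takes one sign per block, so sorting by it groups the blocks together.
w, V = np.linalg.eigh(A_shuffled)          # eigenvalues in ascending order
order = np.argsort(V[:, -2])
A_reordered = A_shuffled[np.ix_(order, order)]
# A_reordered now shows two dense 3x3 diagonal blocks of 0.9.
```

A multiresolution factorization goes further than this one-shot global view by peeling off structure scale by scale, which is what lets the incremental variant process one feature at a time.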

    Speeding up Permutation Testing in Neuroimaging

    Multiple hypothesis testing is a significant problem in nearly all neuroimaging studies. To correct for this phenomenon, we require a reliable estimate of the Family-Wise Error Rate (FWER). The well-known Bonferroni correction method, while simple to implement, is quite conservative and can substantially under-power a study because it ignores dependencies between test statistics. Permutation testing, on the other hand, is an exact, non-parametric method of estimating the FWER for a given α-threshold, but for acceptably low thresholds the computational burden can be prohibitive. In this paper, we show that permutation testing in fact amounts to populating the columns of a very large matrix P. By analyzing the spectrum of this matrix, under certain conditions, we see that P has a low-rank plus low-variance residual decomposition, which makes it suitable for highly sub-sampled (on the order of 0.5%) matrix completion methods. Based on this observation, we propose a novel permutation testing methodology which offers a large speedup without sacrificing the fidelity of the estimated FWER. Our evaluations on four different neuroimaging datasets show that a computational speedup factor of roughly 50× can be achieved while recovering the FWER distribution up to very high accuracy. Further, we show that the estimated α-threshold is also recovered faithfully, and is stable.
    Comment: NIPS 1
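The baseline being accelerated here is the standard maxT permutation test: each relabeling of subjects yields one column of voxel-wise test statistics, and the maxima of those columns form the null distribution that sets the FWER threshold. A toy numpy sketch of that baseline (synthetic data and sizes; the paper's contribution is completing this statistic matrix from ~0.5% of its entries rather than computing every column):

```python
import numpy as np

# Toy maxT permutation test. Each permutation of group labels produces one
# column of the large statistic matrix P; the column-wise maxima give the
# null max distribution used to set the FWER-corrected threshold.
rng = np.random.default_rng(1)
n_voxels, n_subjects, n_perms = 200, 20, 500
data = rng.standard_normal((n_voxels, n_subjects))
labels = np.array([1] * 10 + [-1] * 10)           # two groups of 10

max_null = np.empty(n_perms)
for j in range(n_perms):
    signs = rng.permutation(labels)               # relabel subjects
    stat = data @ signs / n_subjects              # group-difference statistic per voxel
    max_null[j] = np.abs(stat).max()              # max over voxels for this column

# FWER-corrected threshold at alpha = 0.05
threshold = np.quantile(max_null, 0.95)
```

The inner loop is what dominates real studies, since `n_voxels` is in the hundreds of thousands and `n_perms` in the tens of thousands; the low-rank structure of the statistic matrix is what makes sub-sampling viable.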

    Randomized Denoising Autoencoders for Smaller and Efficient Imaging Based AD Clinical Trials

    There is a growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer's disease (AD) in its prodromal stage using statistical machine learning methods. Recently, several authors investigated how clinical trials for AD can be made more efficient (i.e., require a smaller sample size) using predictive measures from such classification methods. In this paper, we explain why predictive measures given by such SVM-type objectives may be less than ideal for use in the setting described above. We give a solution based on a novel deep learning model, randomized denoising autoencoders (rDA), which regresses on training labels y while also accounting for the variance, a property which is very useful for clinical trial design. Our results give strong improvements in sample size estimates over strategies based on multi-kernel learning. Also, rDA predictions appear to correlate more accurately with stages of disease. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime -- the default situation in medical imaging. This result is of independent interest.
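The building block of the model above is a denoising autoencoder: a network trained to reconstruct clean inputs from corrupted ones. A minimal single-hidden-layer sketch in numpy, with tied weights and plain gradient descent (illustrative only; the paper's rDA aggregates a randomized ensemble of such models to also obtain prediction variance, which this sketch does not show):

```python
import numpy as np

# Minimal tied-weight denoising autoencoder on synthetic data.
rng = np.random.default_rng(2)
n, d, h = 100, 30, 10                    # mild large-d flavor: h < d
X = rng.standard_normal((n, d))
W = 0.1 * rng.standard_normal((d, h))    # shared encoder/decoder weights
b, c = np.zeros(h), np.zeros(d)
baseline = np.mean(X**2)                 # error of predicting all zeros

lr = 0.01
for _ in range(300):
    X_noisy = X + 0.3 * rng.standard_normal(X.shape)   # corrupt the input
    H = np.tanh(X_noisy @ W + b)                       # encode
    X_hat = H @ W.T + c                                # tied-weight decode
    err = X_hat - X                                    # reconstruct the CLEAN X
    dH = (err @ W) * (1 - H**2)                        # backprop through tanh
    W -= lr * (X_noisy.T @ dH + err.T @ H) / n         # both paths (tied W)
    b -= lr * dH.mean(0)
    c -= lr * err.mean(0)

recon_mse = np.mean((np.tanh(X @ W + b) @ W.T + c - X) ** 2)
```

After training, `recon_mse` falls below the zero-prediction baseline even though `h < d` forces a compressed representation; the denoising corruption is what regularizes the features.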

    Fundus image registration for vestibularis research

    In research on vestibular nerve disorders, fundus images of both left and right eyes are acquired systematically to precisely assess the rotation of the eye ball that is induced by rotation of the entire head. The measurement is still carried out manually. Although various methods have been proposed for medical image registration, robust detection of rotation remains challenging, especially in images of varied quality in terms of illumination, aberrations, blur and noise. This paper evaluates registration algorithms operating on different levels of semantics: (i) data-based, using the Fourier transform and log-polar maps; (ii) point-based, using the scale-invariant feature transform (SIFT); (iii) edge-based, using Canny edge maps; (iv) object-based, using matched filters for vessel detection; (v) scene-based, detecting papilla and macula automatically; and (vi) manual measurement by two independent medical experts. For evaluation, a database of 22 patients is used, where left and right eye images are each captured in upright head position and at a lateral tilt of ±20°. For 66 pairs of images (132 in total), the results are compared with ground truth, and the performance measures are tabulated. A best correctness of 89.3% was obtained using the pixel-based method, allowing 2.5° deviation from the manual measures. However, the evaluation shows that for applications in computer-aided diagnosis involving a large set of images of varied quality, as in vestibularis research, registration methods based on a single level of semantics are not sufficiently robust. A multi-level semantics approach will improve the results, since failures occur on different images.
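Once any of the point-based pipelines above has produced matched landmark pairs between the upright and tilted fundus images (e.g. SIFT matches, step (ii)), the ocular rotation angle can be read off a least-squares rigid alignment. A numpy sketch using the standard Kabsch/SVD estimator, with synthetic landmarks standing in for real vessel features:

```python
import numpy as np

# Estimate the rotation between two matched 2-D landmark sets (Kabsch).
rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, (40, 2))                 # landmarks in upright image

theta_true = np.deg2rad(20.0)                     # simulated ±20° head tilt
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
pts_tilted = pts @ R_true.T + 0.005 * rng.standard_normal(pts.shape)

# Kabsch: SVD of the cross-covariance of the centered point sets.
P = pts - pts.mean(0)
Q = pts_tilted - pts_tilted.mean(0)
U, _, Vt = np.linalg.svd(P.T @ Q)
R_est = Vt.T @ U.T          # rotation mapping pts -> pts_tilted
                            # (reflection correction omitted for this clean case)
theta_est = np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0]))
```

With 40 matches and small localization noise, `theta_est` recovers the 20° tilt to well within the paper's 2.5° tolerance; in practice the hard part is the matching itself under varying illumination and blur, which is exactly where single-semantics methods fail.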